Optimized Propaganda with Bayesian Networks: Comment on “Articulating Lay Theories Through Graphical Models”

Derek Powell, Kara Weisman, and Ellen M. Markman’s “Articulating Lay Theories Through Graphical Models: A Study of Beliefs Surrounding Vaccination Decisions” (a conference paper from CogSci 2018) represents an exciting advance in marketing research, showing how to use causal graphical models to study why ordinary people have the beliefs they do, and how to intervene to make them be less wrong.

The specific case our authors examine is that of childhood vaccination decisions: some parents don’t give their babies the recommended vaccines, because they’re afraid that vaccines cause autism. (Not true.) This is pretty bad—not only are those unvaccinated kids more likely to get sick themselves, but declining vaccination rates undermine the population’s herd immunity, leading to new outbreaks of highly-contagious diseases like the measles in regions where they were once eradicated.

What’s wrong with these parents, huh?! But that doesn’t have to just be a rhetorical question—Powell et al. show how we can use statistics to make the rhetorical hypophorical and model specifically what’s wrong with these people! Realistically, people aren’t going to just have a raw, “atomic” dislike of vaccination for no reason: parents who refuse to vaccinate their children do so because they’re (irrationally) afraid of giving their kids autism, and not afraid enough of letting their kids get infectious diseases. Nor are beliefs about vaccine effectiveness or side-effects uncaused, but instead depend on other beliefs.

To unravel the structure of the web of beliefs, our authors got Amazon Mechanical Turk participants to take surveys about vaccination-related beliefs, rating statements like “Natural things are always better than synthetic alternatives” or “Parents should trust a doctor’s advice even if it goes against their intuitions” on a 7-point Likert-like scale from “Strongly Agree” to “Strongly Disagree”.

Throwing some off-the-shelf Bayes-net structure-learning software at a training set from the survey data, plus some ancillary assumptions (more-general “theory” beliefs like “skepticism of medical authorities” can cause more-specific “claim” beliefs like “vaccines have harmful additives”, but not vice versa), produces a range of probabilistic models that can be depicted as graphs in which nodes representing the different beliefs are connected by arrows showing which beliefs “cause” others. An arrow from a naturalism node (in this context, denoting a worldview that prefers natural over synthetic things) to a parental expertise node means that people think parents know best because they think that nature is good, not the other way around.
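
To make the structure-learning step concrete, here’s a minimal sketch of how one might do it in Python with the pgmpy library (not the authors’ actual pipeline; the variable names and the CSV file are hypothetical stand-ins for the survey items, and the “theories can cause claims, but not vice versa” assumption is imposed by blacklisting the forbidden edge directions):

```python
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

# Hypothetical stand-ins for the survey items (one column per belief,
# one row per respondent's 1-7 Likert rating); not the paper's actual variable names.
theory_nodes = ["naturalism", "medical_skepticism", "parental_expertise"]
claim_nodes = ["vaccines_cause_autism", "harmful_additives", "diseases_are_dangerous"]

data = pd.read_csv("survey_responses.csv")  # columns: theory_nodes + claim_nodes

# Ancillary assumption: "claim" beliefs may not cause "theory" beliefs,
# so forbid every claim -> theory edge during the search.
black_list = [(c, t) for c in claim_nodes for t in theory_nodes]

search = HillClimbSearch(data)
best_dag = search.estimate(scoring_method=BicScore(data), black_list=black_list)

print(sorted(best_dag.edges()))  # arrows like ("naturalism", "parental_expertise")
```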

Learning these kinds of models is feasible because not all possible causal relationships are consistent with the data: if A and B are statistically independent of each other, but each is dependent with C (and A and B become conditionally dependent given the value of C), it’s kind of hard to make sense of this except to posit that A and B are causes with the common effect C.
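
Here’s a toy simulation of that common-effect signature (made-up data, nothing from the paper): generate A and B independently, let C depend on both, and watch conditioning on C induce a dependence between A and B.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A and B are independent root causes; C is their common effect (a collider).
A = rng.normal(size=n)
B = rng.normal(size=n)
C = A + B + 0.5 * rng.normal(size=n)

print(round(np.corrcoef(A, B)[0, 1], 3))  # ~0.0: A and B are marginally independent
print(round(np.corrcoef(A, C)[0, 1], 3))  # ~0.67: A is dependent with C
print(round(np.corrcoef(B, C)[0, 1], 3))  # ~0.67: B is dependent with C

# Condition on C by restricting to a narrow slice of its values ("explaining away"):
near_zero = np.abs(C) < 0.1
print(round(np.corrcoef(A[near_zero], B[near_zero])[0, 1], 3))  # clearly negative
```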

Simpler models with fewer arrows might sacrifice a little bit of predictive accuracy for the benefit of being more intelligible to humans. Powell et al. ended up choosing a model that can predict responses from the test set at r = .825, explaining 68.1% of the variance. Not bad?!—check out the full 14-node graph in Figure 2 on page 4 of the PDF.
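
(That “explaining 68.1% of the variance” figure is just the square of the reported correlation, as a one-line sanity check:)

```python
r = 0.825
print(f"{r**2:.1%}")  # 68.1%
```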

Causal graphs are useful as a guide for planning interventions: the graph encodes predictions about what would happen if you changed some of the variables. Our authors point out that previous work showed people’s beliefs about vaccine dangers to be difficult to influence, which suggests trying to intervene on the other parent nodes of the intent-to-vaccinate node in the model: if the hoi polloi won’t listen to you when you tell them the costs are minimal (vaccines are safe), instead tell them about the benefits (diseases are really bad and vaccines prevent disease).

To make sure I really understand this, I want to adapt it into a simpler example with made-up numbers where I can do the arithmetic myself. Let me consider a graph with just three nodes—

vaccines are safe → vaccinate against measles ← measles are dangerous

Suppose this represents a structural equation model where an anti-vaxxer-leaning parent-to-be’s propensity-to-vaccinate-against-measles x_V is expressed in terms of belief-in-vaccine-safety x_S and belief-in-measles-danger x_D as (with made-up weights)—

x_V := 0.6·x_S + 0.4·x_D

And suppose that we’re a public health authority trying to decide whether to spend our budget (or what’s left of it after recent funding cuts) on a public education initiative that will increase x_S by 0.1, or one that will increase x_D by 0.3.

We should choose the program that intervenes on x_D, because 0.4 · 0.3 = 0.12 is bigger than 0.6 · 0.1 = 0.06. That’s actionable advice that we couldn’t have derived without a quantitative model of how the lay audience thinks. Exciting!
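
Here’s that comparison as code, with `delta_intent` as an illustrative helper for the made-up linear model above (nothing from the paper):

```python
def delta_intent(w_safe, w_danger, d_safe=0.0, d_danger=0.0):
    """Change in propensity-to-vaccinate x_V under the linear structural equation
    x_V := w_safe * x_S + w_danger * x_D, when x_S shifts by d_safe and x_D by d_danger."""
    return w_safe * d_safe + w_danger * d_danger

# Made-up weights: intent depends more on perceived safety than on perceived danger.
w_safe, w_danger = 0.6, 0.4

safety_program = delta_intent(w_safe, w_danger, d_safe=0.1)    # 0.6 * 0.1 = 0.06
danger_program = delta_intent(w_safe, w_danger, d_danger=0.3)  # 0.4 * 0.3 = 0.12

# The danger-education program moves x_V more, even though w_safe > w_danger.
print(safety_program, danger_program)
```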

At this point, some readers may be wondering why I’ve described this work as “marketing research” about constructing “optimized propaganda.” A couple of those words usually have negative connotations, but educating people about the importance of vaccines is a positive thing. What gives?

The thing is, “Learn the causal graph of why they think that and compute how to intervene on it to make them think something else” is a symmetric weapon—a fully general persuasive technique that doesn’t depend on whether the thing you’re trying to convince them of is true.

In my simplified example, the choice to intervene on x_D was based on numerical assumptions that amount to the claim that it’s sufficiently easier to change x_D than it is to change x_S, such that intervening on x_D is more effective at changing x_V than intervening on x_S (even though x_V depends on x_S more than it does on x_D). But this methodology is completely indifferent to what x_V, x_S, and x_D mean. It would have worked just as well, and for the same reasons, if the graph had been—

Coca-Cola isn't unhealthy → drink Coca-Cola ← Coca-Cola tastes great

Suppose that we’re advertising executives for the Coca-Cola Company trying to decide how to spend our budget (or what’s left of it after recent funding cuts). If consumers won’t listen to us when we tell them the costs of drinking Coke are minimal (lying that it isn’t unhealthy), we should instead tell them about the benefits (Coke tastes good).

Or with different assumptions about the parameters—maybe actually x_V := 0.8·x_S + 0.2·x_D—then intervening to increase belief in “Coca-Cola isn’t unhealthy” would be the right move (because 0.8 · 0.1 = 0.08 is bigger than 0.2 · 0.3 = 0.06). The marketing algorithm that just computes what belief changes will flip the decision node doesn’t have any way to notice or care whether those belief changes are in the direction of more or less accuracy.
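
And the same arithmetic with the flipped (equally made-up) weights:

```python
# Under x_V := 0.8 * x_S + 0.2 * x_D, the "isn't unhealthy" campaign wins instead:
w_healthy, w_tasty = 0.8, 0.2
print(w_healthy * 0.1)  # 0.08 from nudging "Coca-Cola isn't unhealthy" by 0.1
print(w_tasty * 0.3)    # 0.06 from nudging "Coca-Cola tastes great" by 0.3
```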

To be clear—and I really shouldn’t have to say this—this is not a criticism of Powell–Weisman–Markman’s research! The “Learn the causal graph of why they think that” methodology is genuinely really cool! It doesn’t have to be deployed as a marketing algorithm: the process of figuring out which belief change would flip some downstream node is the same thing as what we call locating a crux.[1] The difference is just a matter of forwards or backwards direction: whether you first figure out if the measles vaccine or Coca-Cola is safe and then use whatever answer you come up with to guide your decision, or whether you write the bottom line first.

Of course, most people on most issues don’t have the time or expertise to do their own research. For the most part, we can only hope that the sources we trust as authorities are doing their best to use their limited bandwidth to keep us genuinely informed, rather than merely computing what signals to emit in order to control our decisions.

If that’s not true, we might be in trouble—perhaps increasingly so, if technological developments grant new advantages to the propagation of disinformation over the discernment of truth. In a possible future world where most words are produced by AIs running a “Learn the causal graph of why they think that and intervene on it to make them think something else” algorithm hooked up to a next-generation GPT, even reading plain text from an untrusted source could be dangerous.


  1. Thanks to Anna Salamon for this observation. ↩︎